High-fidelity, AI-based simulated classroom systems enable teachers to rehearse effective teaching strategies. However, dialogue-oriented, open-ended conversations, such as teaching a student about scale factors, can be difficult to model. This paper builds a text-based interactive conversational agent to help teachers practice mathematical questioning skills based on the well-known Instructional Quality Assessment. We take a human-centered approach to designing our system, relying on advances in deep learning, uncertainty quantification, and natural language processing, while acknowledging the limitations of conversational agents for specific pedagogical needs. Using expert input directly during the simulation, we demonstrate how to achieve a high conversation success rate alongside high user satisfaction.
A large body of research demonstrates how teachers' questioning strategies can improve student learning outcomes. However, developing new scenarios is challenging because of the lack of scenario-specific training data and the costs associated with labeling. This paper presents a high-fidelity, AI-based classroom simulator to help teachers rehearse research-based mathematical questioning skills. Using a human-in-the-loop approach, we collected a high-quality training dataset for a mathematical questioning scenario. Leveraging recent advances in uncertainty quantification, we evaluated our conversational agent for usability and analyzed the practicality of incorporating a human-in-the-loop approach for data collection and system evaluation for a mathematical questioning scenario.
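As a rough illustration of the role uncertainty quantification can play in a human-in-the-loop system like the one described above, the sketch below routes low-confidence agent replies to a human expert based on predictive entropy. The threshold, the class-probability inputs, and the escalation mechanism are illustrative assumptions, not details of the actual system.

```python
# Hedged sketch: escalate low-confidence replies to a human-in-the-loop.
# The entropy threshold and the escalation signal are illustrative choices.
import math

def predictive_entropy(probs):
    """Entropy of the model's predictive distribution (higher = less certain)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def respond(candidate_reply, class_probs, entropy_threshold=1.0):
    # Uncertain predictions are routed to a human expert for review.
    if predictive_entropy(class_probs) > entropy_threshold:
        return "ESCALATE_TO_HUMAN"
    return candidate_reply

print(respond("Good question! What happens to the area?", [0.4, 0.35, 0.25]))
```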
IceCube is a cubic-kilometer array of optical sensors for detecting atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, deployed 1.45 km to 2.45 km below the surface of the Antarctic ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is challenging due to the detector's geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively small number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with graph neural networks (GNNs) serving as the classification and reconstruction method. GNNs are able to distinguish neutrino events from cosmic-ray backgrounds, classify different neutrino event types, and reconstruct the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range with the current state-of-the-art maximum-likelihood techniques used in IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR) compared to current IceCube methods. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves by 13%-20% on average compared to current maximum-likelihood techniques. When run on a GPU, the GNN is able to process IceCube events at a rate of nearly the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
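To make the point-cloud-plus-GNN formulation concrete, here is a minimal sketch of an event classifier in this spirit, assuming PyTorch and PyTorch Geometric (with torch_cluster for knn_graph) are installed. The per-hit feature layout (x, y, z, time, charge), the k-nearest-neighbor graph construction, and the layer sizes are illustrative choices, not the architecture used in the analysis above.

```python
# Hedged sketch: classify an IceCube-like event represented as a point cloud.
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool, knn_graph


class EventGNN(torch.nn.Module):
    def __init__(self, n_features=5, hidden=64, n_classes=2, k=8):
        super().__init__()
        self.k = k  # neighbors used to build the event graph
        self.conv1 = GCNConv(n_features, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, n_classes)

    def forward(self, hit_feats, batch):
        # Connect each sensor hit to its k nearest neighbors in (x, y, z).
        edge_index = knn_graph(hit_feats[:, :3], k=self.k, batch=batch)
        h = F.relu(self.conv1(hit_feats, edge_index))
        h = F.relu(self.conv2(h, edge_index))
        h = global_mean_pool(h, batch)  # one vector per event
        return self.head(h)             # event-level class logits


# Toy usage: one event with 40 hits, features (x, y, z, t, charge).
hits = torch.randn(40, 5)
batch = torch.zeros(40, dtype=torch.long)  # all hits belong to event 0
logits = EventGNN()(hits, batch)
```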
Inferring causal relationships from observational data is rarely straightforward, but the problem is especially difficult in high dimensions. For these applications, causal discovery algorithms typically require parametric restrictions or extreme sparsity constraints. We relax these assumptions and focus on an important but more specialized problem, namely recovering the causal order among a subgraph of variables known to descend from some (possibly large) set of confounding covariates, i.e. a $\textit{confounder blanket}$. This is useful in many settings, for example when studying a dynamic biomolecular subsystem with genetic data as background information. Under a structural assumption called the $\textit{confounder blanket principle}$, which we argue is essential for tractable causal discovery in high dimensions, our method accommodates graphs of low or high sparsity while maintaining polynomial time complexity. We present a structure-learning algorithm that is sound and complete with respect to a so-called $\textit{lazy oracle}$. We design inference procedures with finite-sample error control for linear and nonlinear systems, and demonstrate our approach on a range of simulated and real-world datasets. An accompanying $\texttt{R}$ package, $\texttt{cbl}$, is available from $\texttt{CRAN}$.
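The structure-learning algorithm above is defined relative to an oracle answering conditional-independence queries. As a generic illustration of that primitive (and not of the paper's CBL procedure itself), the sketch below tests whether two variables are dependent given a candidate confounder blanket, using a simple partial-correlation test; the data-generating process is an assumption for demonstration.

```python
# Hedged sketch: a partial-correlation conditional-independence test,
# the kind of query a causal-discovery oracle answers.
import numpy as np
from scipy import stats

def partial_corr_test(x, y, Z):
    # Regress out Z from both variables, then test correlation of residuals.
    beta_x, *_ = np.linalg.lstsq(Z, x, rcond=None)
    beta_y, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rx, ry = x - Z @ beta_x, y - Z @ beta_y
    return stats.pearsonr(rx, ry)  # (correlation, p-value)

rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 3))                      # confounder blanket
x = Z @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=200)
y = 2.0 * x + rng.normal(size=200)                 # y descends from x
print(partial_corr_test(x, y, Z))                  # small p-value: dependent given Z
```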
Fairness is a principal social value observable across civilizations around the world. This suggests that fairness underpins social agreements, which are typically described in text, such as contracts. Yet despite this universality, a measure of fairness for texts describing social acts remains lacking. To address this, we return to consider the problem from first principles. Rather than using rules or templates, we draw on the social psychology literature to determine the principal factors humans use when making fairness evaluations. We then attempt to digitize these, via word embeddings, into a multi-dimensional, sentence-level fairness-perception vector to serve as an approximation of these fairness perceptions. This approach exploits the pro-social bias within word embeddings and achieves F1 = 81.0. A second approach, applying PCA and machine learning to the fairness-approximation vectors, yields an F1 score of 86.2. We detail improvements that can be made to the method of projecting sentence embeddings onto a subspace representation of fairness.
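A minimal sketch of the second pipeline's shape, assuming averaged word embeddings as sentence vectors: reduce the sentence embeddings with PCA and fit a classifier on the resulting fairness subspace. The random placeholder embeddings and labels stand in for real data; nothing here reproduces the paper's features or scores.

```python
# Hedged sketch: PCA + classifier over sentence-level fairness vectors.
# Embeddings and labels are random placeholders, not the paper's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
sent_vecs = rng.normal(size=(500, 300))  # stand-in for averaged word embeddings
labels = rng.integers(0, 2, size=500)    # stand-in fair / unfair annotations

# Project sentence embeddings onto a low-dimensional "fairness" subspace.
subspace = PCA(n_components=10).fit_transform(sent_vecs)

X_tr, X_te, y_tr, y_te = train_test_split(subspace, labels, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```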
Dexterous manipulation remains an open problem in robotics. To coordinate the research community's efforts toward solving this problem, we propose a shared benchmark. We designed and built robotic platforms, hosted at the MPI for Intelligent Systems, that can be accessed remotely. Each platform consists of three robot fingers capable of dexterous object manipulation. Users can control the platforms remotely by submitting code that is executed automatically, akin to a computational cluster. Using this setup, (i) we host robot competitions in which teams from anywhere in the world access our platforms to tackle challenging tasks, (ii) we release the datasets collected during these competitions (comprising hundreds of robot-hours), and (iii) we give researchers access to these platforms for their own projects.
Dexterous manipulation is a challenging and important problem in robotics. While data-driven methods are a promising approach, current benchmarks require simulation or extensive engineering support, owing to the limited sample efficiency of prevailing methods. We present benchmarks for the TriFinger system, an open-source robotic platform for dexterous manipulation and the focus of the Real Robot Challenge 2020. The benchmarked methods, which were successful in the challenge, can be generally described as structured policies, as they combine elements of classical robotics with modern policy optimization; a generic sketch of this pattern follows below. The inclusion of this inductive bias promotes sample efficiency, interpretability, reliability, and high performance. Key aspects of this benchmarking effort are the validation of the baselines across both simulation and the real system, thorough ablation studies of the core features of each solution, and a retrospective analysis of the challenge as a manipulation benchmark. Code and demonstration videos for this work can be found on our website (https://sites.google.com/view/benchmark-rrc).
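As a sketch of what a "structured policy" can look like in this sense, the following combines a classical PD controller with an optionally learned residual correction. The observation keys, gains, and residual network are hypothetical; this is a generic instance of the pattern, not any of the challenge solutions.

```python
# Hedged sketch of a structured policy: classical control plus a learned residual.
import numpy as np

class StructuredPolicy:
    def __init__(self, kp=2.0, kd=0.1, residual_net=None):
        self.kp, self.kd = kp, kd
        self.residual_net = residual_net  # e.g. a small MLP trained with PPO

    def act(self, obs):
        # Classical part: PD control driving fingertips toward the object.
        err = obs["object_pos"] - obs["fingertip_pos"]
        u = self.kp * err - self.kd * obs["fingertip_vel"]
        # Learned part: residual correction from policy optimization.
        if self.residual_net is not None:
            u = u + self.residual_net(obs)
        return np.clip(u, -1.0, 1.0)

obs = {"object_pos": np.zeros(3), "fingertip_pos": 0.1 * np.ones(3),
       "fingertip_vel": np.zeros(3)}
print(StructuredPolicy().act(obs))
```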
Humans are spectacular reinforcement learners, constantly learning from and adjusting to experience and feedback. Unfortunately, this does not necessarily mean humans are fast learners. When tasks are challenging, learning can become unacceptably slow. Fortunately, humans do not have to learn tabula rasa, and learning speed can be greatly increased with learning aids. In this work we validate a new type of learning aid -- reward shaping for humans via inverse reinforcement learning (IRL). The goal of this aid is to increase the speed with which humans can learn good policies for specific tasks. Furthermore, this approach complements alternative machine learning techniques such as safety features that try to prevent individuals from making poor decisions. To achieve our results, we first extend a well-known IRL algorithm via kernel methods. Afterwards, we conduct two human-subjects experiments using an online game where players have limited time to learn a good policy. We show with statistical significance that players who receive our learning aid are able to approach desired policies more quickly than the control group.
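For intuition, here is a minimal sketch of reward shaping with an IRL-estimated reward, assuming that reward is linear in (possibly kernelized) state features. Using the learned reward as a potential yields the standard potential-based shaping form, which densifies feedback without changing the optimal policy; the feature map and weights are illustrative, not the authors' exact construction.

```python
# Hedged sketch: shape the feedback shown to a learner with an IRL reward.
# The feature vectors and weights w are illustrative assumptions.
import numpy as np

def irl_reward(state_features, w):
    """Reward recovered by IRL, assumed linear in (kernelized) features."""
    return float(w @ state_features)

def shaped_feedback(env_reward, s_feats, s_next_feats, w, gamma=0.99):
    # Potential-based shaping with the IRL reward as the potential:
    # preserves the optimal policy while giving the learner a denser signal.
    return env_reward + gamma * irl_reward(s_next_feats, w) - irl_reward(s_feats, w)

w = np.array([0.5, -0.2])
print(shaped_feedback(0.0, np.array([1.0, 0.0]), np.array([0.0, 1.0]), w))
```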
Designing experiments often requires balancing between learning about the true treatment effects and earning from allocating more samples to the superior treatment. While optimal algorithms for the Multi-Armed Bandit Problem (MABP) provide allocation policies that optimally balance learning and earning, they tend to be computationally expensive. The Gittins Index (GI) is a solution to the MABP that can simultaneously attain the goals of optimality and computational efficiency, and it has recently been used in experiments with Bernoulli and Gaussian rewards. For the first time, we present a modification of the GI rule that can be used in experiments with exponentially-distributed rewards. We report its performance in simulated 2-armed and 3-armed experiments. Compared to traditional non-adaptive designs, our novel GI-modified design shows operating characteristics comparable in learning (e.g. statistical power) but substantially better in earning (e.g. direct benefits). This illustrates the potential of designs that use a GI approach to allocate participants: they can improve participant benefits, increase efficiency, and reduce experimental costs in adaptive multi-armed experiments with exponential rewards.
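To show the shape of such an adaptive design, the sketch below runs a 3-armed experiment with exponential rewards, using the conjugate Gamma posterior for each arm's rate and an index rule to pick the next participant's arm. The index here is a crude stand-in (posterior mean reward plus an exploration bonus); the true Gittins index requires a separate dynamic-programming computation, and the arm means are hypothetical.

```python
# Hedged sketch of index-based adaptive allocation with exponential rewards.
# `index` is a placeholder for the Gittins index computation.
import numpy as np

rng = np.random.default_rng(1)
true_means = np.array([1.0, 1.5, 2.0])  # hypothetical mean rewards per arm
alpha = np.ones(3)                      # Gamma prior on each arm's rate
beta = np.ones(3)
counts = np.zeros(3)

def index(a, b, n):
    # Crude posterior mean-reward estimate plus an exploration bonus.
    return b / a + 1.0 / np.sqrt(n + 1.0)

for t in range(300):
    arm = int(np.argmax([index(alpha[i], beta[i], counts[i]) for i in range(3)]))
    reward = rng.exponential(true_means[arm])
    alpha[arm] += 1.0    # Gamma(alpha, beta) is conjugate to the exponential
    beta[arm] += reward  # likelihood: alpha += 1, beta += observed reward.
    counts[arm] += 1
print("allocations per arm:", counts)
```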
Quadruped robots are currently used in industrial robotics as mechanical aids to automate several routine tasks. However, the use of such robots in domestic settings remains very much a research endeavor. This paper discusses the design and virtual simulation of such a robot, capable of detecting and understanding human emotions, generating its own gait, and responding via sounds and on-screen expressions. To this end, we use a combination of reinforcement learning and software engineering concepts to simulate a quadruped robot that can understand emotions, navigate various terrains, detect sound sources, and respond to emotions with audio-visual feedback. This paper aims to establish a framework for simulating a quadruped robot that is emotionally intelligent and can primarily respond to audio-visual stimuli with motor or audio responses. Emotion detection from speech was not as performant as ERANNs or Zeta Policy learning, but still achieved an accuracy of 63.5%. The video emotion detection system produced results almost on par with the state of the art, with an accuracy of 99.66%. Owing to its "on-policy" learning process, the PPO algorithm learned extremely rapidly, allowing the simulated dog to demonstrate a remarkably seamless gait across the different cadences and variations. This enabled the quadruped robot to respond to the generated stimuli, allowing us to conclude that it functions as predicted and satisfies the aim of this work.
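As a sketch of the training setup described (not the paper's code), the following uses Stable-Baselines3's PPO on a MuJoCo locomotion task as a stand-in for the simulated quadruped; the environment id, timestep budget, and policy class are illustrative assumptions.

```python
# Hedged sketch: on-policy PPO training for a quadruped-like gait.
# "Ant-v4" is a stand-in environment, not the paper's simulated dog.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Ant-v4")                  # quadruped-like locomotion task
model = PPO("MlpPolicy", env, verbose=1)  # on-policy: learns from fresh rollouts
model.learn(total_timesteps=100_000)
model.save("quadruped_gait_ppo")
```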